Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
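Since the abstract states that the models are publicly released, a minimal usage sketch may be helpful. The snippet below loads a BLOOM checkpoint from the Hugging Face Hub for text generation; it assumes the `transformers` library and Hub hosting under the `bigscience` organization, and uses the small `bloom-560m` variant purely to keep the example lightweight.

```python
# Minimal sketch: loading an open-access BLOOM checkpoint for generation.
# Assumes the weights are hosted on the Hugging Face Hub; the small
# "bigscience/bloom-560m" variant keeps the example lightweight.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "bigscience/bloom-560m"  # swap for the full checkpoint if resources allow
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Translate to French: I love open science."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```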
We work on a rarely explored task named Insubstantial Object Detection (IOD), which aims to localize objects with the following characteristics: (1) amorphous shape with indistinct boundaries; (2) similarity to surroundings; (3) absence in color. Accordingly, distinguishing insubstantial objects in a single static frame is far more challenging, and a collaborative representation of spatial and temporal information is crucial. We therefore construct an IOD-Video dataset comprising 600 videos (141,017 frames), covering various distances, sizes, visibility conditions, and scenes captured across different spectral ranges. In addition, we develop a spatio-temporal aggregation framework for IOD, in which different backbones are deployed and a spatio-temporal aggregation loss (STAloss) is elaborately designed to leverage consistency along the time axis. Experiments on the IOD-Video dataset demonstrate that spatio-temporal aggregation can significantly improve IOD performance. We hope our work will attract further research into this valuable yet challenging task. The code will be available at: \url{https://github.com/calayzhou/iod-video}.
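The abstract does not give the exact form of STAloss; as an illustration only, the sketch below shows a generic temporal-consistency penalty on predicted box centers across adjacent frames, which captures the stated idea of leveraging consistency along the time axis. All names, shapes, and the weighting scheme are assumptions, not the paper's implementation.

```python
# Illustrative sketch (not the paper's exact STAloss): a simple temporal-consistency
# term that penalizes abrupt changes in predicted box centers across adjacent frames.
import torch

def temporal_consistency_loss(centers: torch.Tensor) -> torch.Tensor:
    """centers: (T, N, 2) predicted box centers for N tracked objects over T frames."""
    diffs = centers[1:] - centers[:-1]        # displacement between consecutive frames
    return diffs.pow(2).sum(dim=-1).mean()    # smooth trajectories give small values

# Hypothetical usage: add the term to the detection loss with a weighting factor.
# total_loss = det_loss + lambda_sta * temporal_consistency_loss(pred_centers)
```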
Continually learning to segment more and more types of image regions is a desired capability for many intelligent systems. However, such continual semantic segmentation suffers from the same catastrophic forgetting issue as in continual classification learning. While multiple knowledge distillation strategies originally for continual classification have been well adapted to continual semantic segmentation, they only consider transferring old knowledge based on the outputs from one or more layers of deep fully convolutional networks. Different from existing solutions, this study proposes to transfer a new type of information relevant to knowledge, i.e., the relationships between elements (e.g., pixels or small local regions) within each image, which can capture both within-class and between-class knowledge. The relationship information can be effectively obtained from the self-attention maps in a Transformer-style segmentation model. Considering that pixels belonging to the same class in each image often share similar visual properties, a class-specific region pooling is applied to provide more efficient relationship information for knowledge transfer. Extensive evaluations on multiple public benchmarks show that the proposed self-attention transfer method can further effectively alleviate the catastrophic forgetting issue, and its flexible combination with one or more widely adopted strategies significantly outperforms state-of-the-art solutions.
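As a rough illustration of the relationship-transfer idea (not the paper's actual implementation), the sketch below pools a Transformer self-attention map per class using token-level class labels and matches the pooled maps of the frozen old model and the new model; all function names, shapes, and the MSE matching criterion are assumptions made for exposition.

```python
# Hedged sketch: distill relationship information by matching self-attention maps
# between the old (frozen) and new model, pooled per class so that tokens of the
# same class share one aggregated relationship vector.
import torch
import torch.nn.functional as F

def class_pooled_attention(attn: torch.Tensor, class_ids: torch.Tensor, num_classes: int):
    """attn: (T, T) self-attention over T tokens; class_ids: (T,) integer class per token."""
    one_hot = F.one_hot(class_ids, num_classes).float()   # (T, C)
    counts = one_hot.sum(dim=0).clamp(min=1.0)             # tokens per class
    # Average attention each token pays to every class region.
    return attn @ one_hot / counts                          # (T, C)

def attention_transfer_loss(attn_old, attn_new, class_ids, num_classes):
    pooled_old = class_pooled_attention(attn_old, class_ids, num_classes)
    pooled_new = class_pooled_attention(attn_new, class_ids, num_classes)
    return F.mse_loss(pooled_new, pooled_old.detach())
```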
Owing to irregular shapes, various sizes, and indistinguishable boundaries between normal and infected tissues, it remains a challenging task to accurately segment COVID-19 infection lesions on CT images. In this paper, a novel segmentation scheme is proposed for COVID-19 infections by enhancing supervised information at different levels and fusing multi-scale feature maps based on an encoder-decoder architecture. To this end, a deep collaborative supervision (co-supervision) scheme is proposed to guide the network to learn edge and semantic features. More specifically, an Edge Supervised Module (ESM) is first designed to highlight low-level boundary features by incorporating edge supervision into the initial down-sampling stage. Meanwhile, an Auxiliary Semantic Supervised Module (ASSM) is proposed to strengthen high-level semantic information by integrating mask supervision into the later stages. Then, an Attention Fusion Module (AFM) is developed to fuse multi-scale feature maps from different levels, using an attention mechanism to reduce the semantic gap between high-level and low-level feature maps. Finally, the effectiveness of the proposed scheme is demonstrated on four different COVID-19 CT datasets. The results show that all three proposed modules are promising. On top of the baseline (ResUNet), using ESM, ASSM, or AFM alone increases the Dice metric by 1.12\%, 1.95\%, and 1.63\%, respectively, while combining the three modules together raises it by 3.97\% on our dataset. Compared with existing approaches on the individual datasets, the proposed method achieves better segmentation performance in some major metrics, and achieves the best generalization and overall performance.
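To make the edge-supervision idea concrete, the following sketch derives a boundary map from the ground-truth infection mask via a morphological gradient and supervises an auxiliary edge prediction with binary cross-entropy. This illustrates the spirit of the ESM rather than its actual implementation; all names and the 3x3 neighborhood are assumptions.

```python
# Illustrative sketch of edge supervision: derive a boundary map from the
# ground-truth mask and supervise an auxiliary edge head with BCE.
import torch
import torch.nn.functional as F

def mask_to_edges(mask: torch.Tensor) -> torch.Tensor:
    """mask: (B, 1, H, W) binary mask; returns a soft boundary map of the same shape."""
    # Morphological gradient: dilation minus erosion highlights boundary pixels.
    dilated = F.max_pool2d(mask, kernel_size=3, stride=1, padding=1)
    eroded = -F.max_pool2d(-mask, kernel_size=3, stride=1, padding=1)
    return (dilated - eroded).clamp(0, 1)

def edge_supervision_loss(edge_logits: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    return F.binary_cross_entropy_with_logits(edge_logits, mask_to_edges(mask))
```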
The Asian Giant Hornet (AGH) has appeared in Washington State and appears to pose a potential danger of biological invasion. Washington State has collected public photos and videos of detected insects for verification and further investigation. In this paper, we analyze the data using data analysis, statistics, discrete mathematics, and deep learning techniques. First, we visualize the geographical distribution of the insects in Washington State. Then we investigate the insect population across different months of the year and different days of a month. Third, we employ wavelet analysis to examine the periodic spread of AGH. Fourth, we apply ordinary differential equations to examine AGH numbers under different natural growth rates and reaction speeds, and output the potential spread coefficient. Next, we use cellular automata combined with the potential spread coefficient to simulate geographical differences under varying potential spread. To update the model, we use delay differential equations to simulate human intervention, using the time difference between detection time and submission time to determine the time unit of the delay. After that, we build a lightweight CNN named SqueezeNet and evaluate its classification performance. We then involve several no-reference image quality metrics, including NIQE, image gradient, entropy, contrast, and TOPSIS, to judge the causes of misclassification. In addition, we build a random forest classifier to identify positive and negative samples based only on image quality. We also report the feature importance and conduct an error analysis. Furthermore, we present a sensitivity analysis to verify the robustness of our model. Finally, we discuss the strengths and weaknesses of our model and draw conclusions.
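As an illustrative sketch of the final classification step, the snippet below trains a random forest on no-reference image-quality features and reports feature importances. The feature matrix and labels are placeholders, not the study's data, and the feature names merely stand in for the NIQE, gradient, entropy, and contrast scores mentioned above.

```python
# Hedged sketch: random forest separating positive / negative hornet reports
# from image-quality features only (placeholder data, illustrative features).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# X: one row per submitted photo, columns = [niqe, mean_gradient, entropy, contrast]
X = np.random.rand(200, 4)              # placeholder feature matrix
y = np.random.randint(0, 2, size=200)   # placeholder labels (1 = verified AGH)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("accuracy:", clf.score(X_test, y_test))
print("feature importances:", clf.feature_importances_)
```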
Multi-agent formation together with obstacle avoidance is one of the most actively studied topics in the field of multi-agent systems. While some classic controllers such as model predictive control (MPC) and fuzzy control achieve a certain measure of success, most of them require precise global information, which is not accessible in harsh environments. On the other hand, some reinforcement learning (RL) based approaches adopt a leader-follower structure to organize the behaviors of different agents, which sacrifices cooperation between agents and thus causes bottlenecks in maneuverability and robustness. In this paper, we propose a distributed formation and obstacle avoidance method based on multi-agent reinforcement learning (MARL). Agents in our system only use local and relational information to make decisions and control themselves distributively, and will quickly reorganize into a new topology in case of any disconnection. Compared with baselines (both classic control methods and other RL-based methods), our method achieves better performance in terms of formation error, formation convergence rate, and success rate of obstacle avoidance. The feasibility of our method is verified by both simulation and hardware implementation with Ackermann-steering vehicles.
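One of the reported metrics, formation error, can be illustrated with a simple definition: the mean distance between each agent and its desired slot in the target formation after removing the common translation. The sketch below is an assumed definition for exposition, not necessarily the one used in the paper.

```python
# Hedged sketch of a formation-error metric (assumed definition).
import numpy as np

def formation_error(positions: np.ndarray, desired_offsets: np.ndarray) -> float:
    """positions, desired_offsets: (N, 2) agent positions and target offsets from the centroid."""
    centered = positions - positions.mean(axis=0)   # remove the group's common translation
    return float(np.linalg.norm(centered - desired_offsets, axis=1).mean())
```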
In this paper, we present a large-scale detailed 3D face dataset, FaceScape, and the corresponding benchmark to evaluate single-view facial 3D reconstruction. By training on FaceScape data, a novel algorithm is proposed to predict elaborate riggable 3D face models from a single image input. The FaceScape dataset provides 18,760 textured 3D faces, captured from 938 subjects, each with 20 specific expressions. The 3D models contain pore-level facial geometry and are also processed to be topologically uniform. These fine 3D facial models can be represented as a 3D morphable model for coarse shape and displacement maps for detailed geometry. Taking advantage of the large-scale and high-accuracy dataset, a novel algorithm is further proposed to learn the expression-specific dynamic details using a deep neural network. The learned relationship serves as the foundation of our 3D face prediction system from a single image input. Different from previous methods, our predicted 3D models carry highly detailed geometry under different expressions. We also use FaceScape data to generate in-the-wild and in-the-lab benchmarks to evaluate recent single-view face reconstruction methods. The accuracy is reported and analyzed along the dimensions of camera pose and focal length, which provides a faithful and comprehensive evaluation and reveals new challenges. The unprecedented dataset, benchmark, and code have been released to the public for research purposes.
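The representation described above, a morphable model for the coarse shape plus a displacement map for pore-level detail, can be sketched as follows. The linear-combination form, array names, and dimensions are illustrative assumptions rather than FaceScape's released code.

```python
# Hedged sketch: coarse shape from a linear morphable model, fine detail from
# per-vertex displacement along vertex normals (illustrative shapes and names).
import numpy as np

def coarse_shape(mean_shape, id_basis, exp_basis, id_coeff, exp_coeff):
    """mean_shape: (V, 3); id_basis: (K_id, V, 3); exp_basis: (K_exp, V, 3)."""
    return (mean_shape
            + np.tensordot(id_coeff, id_basis, axes=1)    # identity variation
            + np.tensordot(exp_coeff, exp_basis, axes=1))  # expression variation

def add_displacement(verts, normals, displacement):
    """Offset each vertex along its normal by a scalar sampled from the displacement map."""
    return verts + normals * displacement[:, None]          # (V, 3)
```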
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset. However, the existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility. The security risks stemming from them have not been explored. This study performs the first backdoor attack against the models trained on the data distilled by dataset distillation models in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent these defense mechanisms.
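To make the poisoning stage concrete, the sketch below shows a NAIVEATTACK-style step: stamping a small trigger patch onto a fraction of the original images and relabeling them to the attacker's target class before distillation, so the backdoor can be carried into the synthetic set. Parameter names and the patch design are illustrative assumptions, not the paper's exact procedure.

```python
# Hedged sketch of trigger injection prior to dataset distillation.
import torch

def add_trigger(images: torch.Tensor, labels: torch.Tensor, target_class: int,
                poison_frac: float = 0.1, patch_size: int = 3):
    """images: (N, C, H, W) in [0, 1]; returns poisoned copies of images and labels."""
    images, labels = images.clone(), labels.clone()
    n_poison = int(poison_frac * len(images))
    idx = torch.randperm(len(images))[:n_poison]
    images[idx, :, -patch_size:, -patch_size:] = 1.0   # white square in the bottom-right corner
    labels[idx] = target_class                         # re-label to the attacker's target
    return images, labels
```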
Few Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes with only a few support examples. In this work, we explore a simple yet unified solution for FSIS as well as its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully explore the relationship between support and query features within a Transformer-like framework. Our key insights are twofold: first, with the aid of support masks, we can generate dynamic class centers more appropriately to re-weight query features; second, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice, at the feature level and at the instance level. In particular, we first design a mask-based dynamic weighting module to enhance support features and then propose to link object queries for better calibration via cross-attention. After the above steps, performance on the novel classes can be improved significantly over our strong baseline. Additionally, our new framework can be easily extended to incremental FSIS with minor modifications. When benchmarked on the COCO dataset under the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance compared to existing approaches across different shots, e.g., we boost nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method for 10/30-shot. We further demonstrate the superiority of our approach on Few Shot Object Detection. Code and models will be available.
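The first insight, mask-guided dynamic class centers that re-weight query features, can be sketched as masked average pooling over support features followed by similarity-based re-weighting of the query feature map. The sketch below is an assumed simplification for exposition, not the released RefT code.

```python
# Hedged sketch: masked average pooling of support features into class centers,
# then cosine-similarity re-weighting of query features (illustrative shapes).
import torch
import torch.nn.functional as F

def masked_class_centers(support_feats, support_masks):
    """support_feats: (S, C, H, W); support_masks: (S, 1, H, W) binary -> (S, C) centers."""
    masked = support_feats * support_masks
    return masked.sum(dim=(2, 3)) / support_masks.sum(dim=(2, 3)).clamp(min=1e-6)

def reweight_query(query_feats, centers):
    """query_feats: (C, H, W); centers: (S, C) -> similarity-weighted query features."""
    sims = F.cosine_similarity(query_feats.unsqueeze(0), centers[:, :, None, None], dim=1)
    weight = sims.max(dim=0).values.clamp(min=0)   # (H, W): best-matching center per location
    return query_feats * weight.unsqueeze(0)
```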
Accurate determination of a small molecule candidate (ligand) binding pose in its target protein pocket is important for computer-aided drug discovery. Typical rigid-body docking methods ignore the pocket flexibility of protein, while the more accurate pose generation using molecular dynamics is hindered by slow protein dynamics. We develop a tiered tensor transform (3T) algorithm to rapidly generate diverse protein-ligand complex conformations for both pose and affinity estimation in drug screening, requiring neither machine learning training nor lengthy dynamics computation, while maintaining both coarse-grain-like coordinated protein dynamics and atomistic-level details of the complex pocket. The 3T conformation structures we generate are closer to experimental co-crystal structures than those generated by docking software, and more importantly achieve significantly higher accuracy in active ligand classification than traditional ensemble docking using hundreds of experimental protein conformations. 3T structure transformation is decoupled from the system physics, making future usage in other computational scientific domains possible.